We introduce a neural implicit representation for objects grasped by multiple robotic hands. Different grasps across multiple robotic hands are encoded into a shared latent space. Each latent vector is learned to decode two 3D shapes via signed distance functions: the 3D shape of the object and the 3D shape of the robotic hand. In addition, a distance metric in the latent space is learned to preserve the similarity between grasps of different robotic hands, where grasp similarity is defined according to the contact regions of the robotic hands. This property enables us to transfer grasps between different grippers, including human hands; such grasp transfer has the potential to share grasps between robots and to enable robots to learn grasping skills from humans. Furthermore, the encoded signed distance functions of objects and grasps in our implicit representation can be used for 6D object pose estimation and grasp contact optimization from partial point clouds, which enables robotic grasping in the real world.
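A minimal sketch of the shared-latent decoding idea described above: a latent grasp code, together with a 3D query point, is decoded into two signed distance values, one for the object surface and one for the hand surface. All names and dimensions here are hypothetical, and the random two-headed MLP stands in for the learned decoder.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical two-headed SDF decoder: a shared latent code z plus a 3D
# query point x are mapped to two signed distances, one for the object
# shape and one for the robot-hand shape.
LATENT_DIM, HIDDEN = 8, 32

W1 = rng.normal(0, 0.1, (LATENT_DIM + 3, HIDDEN))  # shared trunk weights
w_obj = rng.normal(0, 0.1, HIDDEN)                  # object-SDF head
w_hand = rng.normal(0, 0.1, HIDDEN)                 # hand-SDF head

def decode_sdf(z, x):
    """Return (sdf_object, sdf_hand) at query point x for latent grasp code z."""
    h = np.tanh(np.concatenate([z, x]) @ W1)
    return float(h @ w_obj), float(h @ w_hand)

z = rng.normal(size=LATENT_DIM)          # one latent grasp code
sdf_o, sdf_h = decode_sdf(z, np.array([0.1, -0.2, 0.3]))
print(sdf_o, sdf_h)
```

Evaluating the decoder on a dense grid of query points and extracting the zero level set would recover both surfaces for a given grasp code.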
Recent sequential recommendation models increasingly rely on consecutive short-term user interaction sequences to model user interests. However, these approaches raise concerns about both short-term and long-term interests. (1) Short-term: an interaction sequence may not result from a single interest but rather from several intertwined interests, even within a short period, leading to their failure to model skip behaviors; (2) Long-term: interaction sequences are mostly observed sparsely at discrete intervals rather than continuously over the long term. This makes it difficult to infer long-term interests, since only discrete interest representations can be derived, without accounting for interest dynamics across sequences. In this study, we address these issues by learning (1) multi-scale representations of short-term interests; and (2) dynamics-aware representations of long-term interests. To this end, we propose an Interest Dynamics modeling framework using generative Neural Processes, coined IDNP, to model user interests from a functional perspective. IDNP learns a global interest function family to define each user's long-term interest as a function instantiation, capturing interest dynamics through functional continuity. Specifically, IDNP first encodes each user's short-term interactions into multi-scale representations, which are then summarized as a user context. By combining latent global interests with the user context, IDNP then reconstructs the long-term user interest function and predicts interactions at upcoming query timesteps. Moreover, IDNP can model such interest functions even when interaction sequences are limited and non-consecutive. Extensive experiments on four real-world datasets demonstrate that our model outperforms the state of the art on various evaluation metrics.
Recent studies have shown that the visual stimuli humans perceive can be depicted in images generated by a two-stage supervised framework from electroencephalography (EEG), referred to as EEG-visual reconstruction. However, such methods cannot reproduce the exact visual stimulus, since it is the human-specified annotation of images, not their data, that determines what the synthesized images are. Moreover, the synthesized images often suffer from noisy EEG encodings and unstable training of generative models, making them hard to recognize. Instead, we present a single-stage EEG-visual retrieval paradigm in which the data of the two modalities are correlated, as opposed to their annotations, allowing us to recover the exact visual stimulus for an EEG clip. We maximize the mutual information between the EEG encoding and the associated visual stimulus by optimizing a contrastive self-supervised objective, which brings two additional benefits. First, it enables the EEG encodings to handle visual classes beyond those seen during training, since learning is not directed at class annotations. Second, the model is no longer required to generate every detail of the visual stimulus; it instead focuses on cross-modal alignment and retrieves images at the instance level, ensuring distinguishable model outputs. Empirical studies are conducted on the largest single-subject EEG dataset measuring brain activity evoked by image stimuli. We demonstrate that the proposed method completes the instance-level EEG-visual retrieval task, which existing methods cannot. We also examine the implications of a range of EEG and visual encoder structures. Furthermore, on the mostly studied semantic-level EEG-visual classification task, despite not using class annotations, the proposed method outperforms state-of-the-art supervised EEG-visual reconstruction approaches, particularly in its ability to recognize open classes.
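The contrastive self-supervised objective described above can be sketched as a symmetric InfoNCE loss over paired EEG/image embeddings. This is a generic illustration of the technique, not the paper's exact implementation; the function names and the temperature value are assumptions.

```python
import numpy as np

def info_nce(eeg_emb, img_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired embeddings.

    Row i of each matrix is assumed to come from the same stimulus, so the
    diagonal of the similarity matrix holds the positive pairs.
    """
    eeg = eeg_emb / np.linalg.norm(eeg_emb, axis=1, keepdims=True)
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    logits = eeg @ img.T / temperature
    labels = np.arange(len(logits))

    def xent(l):
        # cross-entropy with the diagonal as the target class
        l = l - l.max(axis=1, keepdims=True)
        logp = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -logp[labels, labels].mean()

    # average over both retrieval directions (EEG->image and image->EEG)
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
z = rng.normal(size=(4, 16))
loss_aligned = info_nce(z, z)                        # perfectly aligned pairs
loss_random = info_nce(z, rng.normal(size=(4, 16)))  # unrelated pairs
print(loss_aligned, loss_random)
```

Minimizing this loss pulls each EEG encoding toward its own stimulus embedding and away from the other stimuli in the batch; at retrieval time, the nearest image embedding to an EEG encoding is returned.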
Speaker verification systems have been widely used in smartphones and Internet-of-Things devices to identify legitimate users. Recent work has shown that adversarial attacks such as FakeBob can work effectively against speaker verification systems. The goal of this paper is to design a detector that can distinguish original audio from audio contaminated by adversarial attacks. Specifically, our designed detector, called MEH-FEST, calculates the minimum energy in high frequencies from the short-time Fourier transform of the audio and uses it as a detection metric. Through analysis and experiments, we show that our proposed detector is easy to implement, fast in processing input audio, and effective in determining whether an audio clip is corrupted by FakeBob attacks. The experimental results show that the detector is extremely effective, with near-zero false-positive and false-negative rates for detecting FakeBob attacks against Gaussian mixture model (GMM) and i-vector speaker verification systems. Moreover, adaptive adversarial attacks against our proposed detector and their countermeasures are discussed and studied, showcasing the game between attackers and defenders.
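The detection metric described above, the minimum over STFT frames of the energy in high-frequency bins, can be sketched as follows. The frame size, hop, and cutoff frequency here are illustrative assumptions, not the paper's tuned values.

```python
import numpy as np

def min_high_freq_energy(audio, sr=16000, frame=512, hop=256, cutoff_hz=6000):
    """Minimum over STFT frames of the energy above `cutoff_hz`.

    Clean speech tends to have frames with very little high-frequency
    energy; an additive adversarial perturbation raises that floor.
    """
    window = np.hanning(frame)
    freqs = np.fft.rfftfreq(frame, d=1.0 / sr)
    band = freqs >= cutoff_hz
    energies = []
    for start in range(0, len(audio) - frame + 1, hop):
        spec = np.abs(np.fft.rfft(audio[start:start + frame] * window))
        energies.append(float(np.sum(spec[band] ** 2)))
    return min(energies)

rng = np.random.default_rng(0)
t = np.arange(16000) / 16000
clean = np.sin(2 * np.pi * 440 * t)                  # pure low-frequency tone
perturbed = clean + 0.05 * rng.normal(size=t.shape)  # broadband perturbation
print(min_high_freq_energy(clean), min_high_freq_energy(perturbed))
```

Thresholding this metric yields a simple detector: audio whose minimum high-frequency energy exceeds the threshold is flagged as potentially adversarial.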
Few Shot Instance Segmentation (FSIS) requires models to detect and segment novel classes with only a few support examples. In this work, we explore a simple yet unified solution for FSIS as well as its incremental variants, and introduce a new framework named Reference Twice (RefT) to fully explore the relationship between support/query features based on a Transformer-like framework. Our key insights are twofold: First, with the aid of support masks, we can generate dynamic class centers more appropriately to re-weight query features. Second, we find that support object queries have already encoded key factors after base training. In this way, the query features can be enhanced twice from two aspects, i.e., feature level and instance level. In particular, we first design a mask-based dynamic weighting module to enhance support features and then propose to link object queries for better calibration via cross-attention. After the above steps, the novel classes can be improved significantly over our strong baseline. Additionally, our new framework can be easily extended to incremental FSIS with minor modification. When benchmarking results on the COCO dataset for the FSIS, gFSIS, and iFSIS settings, our method achieves competitive performance compared to existing approaches across different shots, e.g., we boost nAP by a noticeable +8.2/+9.4 over the current state-of-the-art FSIS method for 10/30-shot. We further demonstrate the superiority of our approach on Few Shot Object Detection. Code and model will be available.
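The first insight above, pooling support features inside the support mask into a dynamic class center and using it to re-weight query features, can be sketched as below. This is a simplified single-class illustration under assumed shapes, not the RefT module itself.

```python
import numpy as np

def reweight_query(query_feats, support_feats, support_mask):
    """Re-weight query features by similarity to a mask-pooled class center.

    query_feats:   (H, W, C) query feature map
    support_feats: (H, W, C) support feature map
    support_mask:  (H, W)    binary foreground mask of the support object
    """
    m = support_mask[..., None].astype(float)
    center = (support_feats * m).sum(axis=(0, 1)) / m.sum()  # masked average pool
    center /= np.linalg.norm(center)
    q = query_feats / np.linalg.norm(query_feats, axis=-1, keepdims=True)
    sim = q @ center                            # per-pixel cosine similarity, (H, W)
    return query_feats * (1 + sim)[..., None]   # emphasize class-relevant pixels

rng = np.random.default_rng(0)
q = rng.normal(size=(8, 8, 16))
s = rng.normal(size=(8, 8, 16))
mask = np.zeros((8, 8))
mask[2:6, 2:6] = 1                              # support object occupies the center
out = reweight_query(q, s, mask)
print(out.shape)
```

Because the center is computed from the masked support features rather than from class annotations, it adapts dynamically to whichever novel class the support example depicts.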
In this chapter, we review and discuss the transformation of AI technology in HCI/UX work and assess how AI technology will change how we do the work. We first discuss how AI can be used to enhance the result of user research and design evaluation. We then discuss how AI technology can be used to enhance HCI/UX design. Finally, we discuss how AI-enabled capabilities can improve UX when users interact with computing systems, applications, and services.
As one of the most important psychological stress reactions, micro-expressions (MEs) are spontaneous and transient facial expressions that can reveal the genuine emotions of human beings. Thus, recognizing MEs (MER) automatically is becoming increasingly crucial in the field of affective computing, and provides essential technical support in lie detection, psychological analysis, and other areas. However, the lack of abundant ME data seriously restricts the development of cutting-edge data-driven MER models. Although several spontaneous ME datasets have recently been released to alleviate this problem, the amount of available data remains small. To solve the problem of ME data hunger, we construct a dynamic spontaneous ME dataset with the largest current ME data scale, called DFME (Dynamic Facial Micro-expressions), which includes 7,526 well-labeled ME videos induced from 671 participants and annotated by more than 20 annotators over three years. Afterwards, we adopt four classical spatiotemporal feature learning models on DFME to perform MER experiments and objectively verify the validity of the DFME dataset. In addition, we explore different solutions to the class imbalance and key-frame sequence sampling problems in dynamic MER on DFME, so as to provide a valuable reference for future research. The comprehensive experimental results show that our DFME dataset can facilitate the research of automatic MER and provides a new benchmark for MER. DFME will be published via https://mea-lab-421.github.io.
Face Anti-spoofing (FAS) is essential to secure face recognition systems from various physical attacks. However, recent research generally focuses on short-distance applications (e.g., phone unlocking) while lacking consideration of long-distance scenes (e.g., surveillance security checks). In order to promote relevant research and fill this gap in the community, we collect a large-scale Surveillance High-Fidelity Mask (SuHiFiMask) dataset captured under 40 surveillance scenes, which has 101 subjects from different age groups with 232 3D attacks (high-fidelity masks), 200 2D attacks (posters, portraits, and screens), and 2 adversarial attacks. In this scenario, low image resolution and noise interference are new challenges faced in surveillance FAS. Together with the SuHiFiMask dataset, we propose a Contrastive Quality-Invariance Learning (CQIL) network to alleviate the performance degradation caused by image quality from three aspects: (1) An Image Quality Variable module (IQV) is introduced to recover image information associated with discrimination by incorporating a super-resolution network. (2) Generated sample pairs are used to simulate quality-variance distributions, helping the contrastive learning strategy obtain robust feature representations under quality variation. (3) A Separate Quality Network (SQN) is designed to learn discriminative features independent of image quality. Finally, extensive experiments verify the quality of the SuHiFiMask dataset and the superiority of the proposed CQIL.
When using LiDAR semantic segmentation models for safety-critical applications such as autonomous driving, it is essential to understand and improve their robustness with respect to a large range of LiDAR corruptions. In this paper, we aim to comprehensively analyze the robustness of LiDAR semantic segmentation models under various corruptions. To rigorously evaluate the robustness and generalizability of current approaches, we propose a new benchmark called SemanticKITTI-C, which features 16 out-of-domain LiDAR corruptions in three groups, namely adverse weather, measurement noise, and cross-device discrepancy. We then systematically investigate 11 LiDAR semantic segmentation models, spanning different input representations (e.g., point clouds, voxels, projected images), network architectures, and training schemes. Through this study, we obtain two insights: 1) The input representation plays a crucial role in robustness; specifically, under specific corruptions, different representations perform quite differently. 2) Although state-of-the-art methods for LiDAR semantic segmentation achieve promising results on clean data, they are less robust when dealing with noisy data. Finally, based on the above observations, we design a robust LiDAR segmentation model (RLSeg) which greatly boosts robustness with simple but effective modifications. We hope that our benchmark, comprehensive analysis, and observations can boost future research in robust LiDAR semantic segmentation for safety-critical applications.
Panoptic Part Segmentation (PPS) unifies panoptic segmentation and part segmentation into one task. Previous works utilize separate approaches to handle thing, stuff, and part predictions without shared computation and task association. We aim to unify these tasks at the architectural level, designing the first end-to-end unified framework, named Panoptic-PartFormer. Moreover, we find that the previous metric PartPQ is biased towards PQ. To handle both issues, we make the following contributions: First, we design a meta-architecture that decouples the part features and the things/stuff features. We model things, stuff, and parts as object queries and directly learn to optimize all three forms of prediction as a unified mask prediction and classification problem. We term our model Panoptic-PartFormer. Second, we propose a new metric, Part-Whole Quality (PWQ), to better measure this task from both pixel-region and part-whole perspectives; it can also decouple the errors of part segmentation and panoptic segmentation. Third, inspired by Mask2Former and based on our meta-architecture, we propose Panoptic-PartFormer++ with a new part-whole interaction scheme using masked cross attention, further boosting part segmentation quality. Finally, extensive ablation studies and analysis demonstrate the effectiveness of both Panoptic-PartFormer and Panoptic-PartFormer++. Compared with the original Panoptic-PartFormer, our Panoptic-PartFormer++ achieves 2% PartPQ and 3% PWQ improvements on the Cityscapes PPS dataset and 5% PartPQ on the Pascal Context PPS dataset. On both datasets, Panoptic-PartFormer++ achieves new state-of-the-art results with a significant cost drop of 70% in GFlops and 50% in parameters. Our models can serve as a strong baseline and aid future research in PPS. Code will be available.
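The masked cross-attention idea underlying the part-whole interaction above can be sketched as a single attention head in which part queries may only attend to whole-object locations permitted by a mask. Shapes and names here are illustrative assumptions, not the Panoptic-PartFormer++ implementation.

```python
import numpy as np

def masked_cross_attention(part_q, whole_feats, attn_mask):
    """Single-head masked cross attention in the spirit of Mask2Former.

    part_q:      (Q, C) part queries
    whole_feats: (N, C) flattened thing/stuff features (keys = values here)
    attn_mask:   (Q, N) True where attention is permitted
    """
    scores = part_q @ whole_feats.T / np.sqrt(part_q.shape[1])
    scores = np.where(attn_mask, scores, -1e9)   # forbid masked-out positions
    scores = scores - scores.max(axis=1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=1, keepdims=True)
    return weights @ whole_feats                 # attended part features, (Q, C)

rng = np.random.default_rng(0)
q = rng.normal(size=(3, 8))      # 3 part queries
kv = rng.normal(size=(10, 8))    # 10 whole-object feature locations
mask = rng.random((3, 10)) > 0.5 # which locations each query may see
out = masked_cross_attention(q, kv, mask)
print(out.shape)
```

Restricting each part query to its parent object's spatial support keeps part predictions consistent with the panoptic prediction, which is the point of coupling the two tasks.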